
    Asymmetry and the Nucleosynthetic Signature of Nearly Edge-Lit Detonation in White Dwarf Cores

    Most of the leading explosion scenarios for Type Ia supernovae involve the nuclear incineration of a white dwarf star through a detonation wave. Several scenarios have been proposed for how this detonation may actually occur, but the exact mechanism and environment in which it takes place remain unknown. We explore the effects of an off-center initiated detonation on the spatial distribution of the nucleosynthetic yield products in a toy model -- a pre-expanded, near Chandrasekhar-mass white dwarf. We find that a single-point, nearly edge-lit detonation produces asymmetries in the density and thermal profiles, notably in the expansion timescale, throughout the supernova ejecta. We demonstrate that this asymmetry of the thermodynamic trajectories should be common to off-center detonations in which a small amount of the star is burned prior to detonation. The sensitivity of the yields to the expansion timescale results in an asymmetric distribution of the elements synthesized as reaction products. We tabulate the shift in the center of mass of the various elements produced in our model supernova and find an odd-even pattern for elements past silicon. Our calculations show that off-center, single-point detonations in carbon-oxygen white dwarfs are marked by significant composition asymmetries in their remnants, which bear potentially observable signatures in both velocity and coordinate space, including an elemental nickel mass fraction that varies by a factor of two to three from one side of the remnant to the other.
    Comment: 7 pages, 7 figures; accepted for publication in the Astrophysical Journal.

    Initiation of the detonation in the gravitationally confined detonation model of Type Ia supernovae

    We study the initiation of the detonation in the gravitationally confined detonation (GCD) model of Type Ia supernovae (SNe Ia). Initiation of the detonation occurs spontaneously in a region where the length scale of the temperature gradient, extending from a flow in which carbon burning is already occurring into unburned fuel, is commensurate with the range of critical length scales derived from 1D simulations that resolve the initiation of a detonation. By increasing the maximum resolution in a truncated cone encompassing this region, beginning somewhat before initiation of the detonation occurs, we successfully simulate in situ the first gradient-initiated detonation in a whole-star simulation. The detonation emerges when a compression wave overruns a pocket of fuel situated in a Kelvin-Helmholtz cusp at the leading edge of the inwardly directed jet of burning carbon. The compression wave pre-conditions the temperature in the fuel in such a way that the Zel'dovich gradient mechanism can operate, and a detonation ensues. We explore the dependence of the length scale of the temperature gradient on spatial resolution and discuss the implications for the robustness of this detonation mechanism. We find that the time and the location at which initiation of the detonation occurs vary with resolution. In particular, initiation of a detonation had not yet occurred in our highest-resolution simulation by the time we ended the simulation because of its computational demands. We suggest that the turbulent shear layer surrounding the inwardly directed jet provides the most favorable physical conditions, and therefore the most likely location, for initiation of a detonation in the GCD model.
    Comment: 28 pages, 12 figures, 1 table; accepted to ApJ.

    Spontaneous Initiation of Detonations in White Dwarf Environments: Determination of Critical Sizes

    Some explosion models for Type Ia supernovae (SNe Ia), such as the gravitationally confined detonation (GCD) or the double detonation sub-Chandrasekhar (DDSC) models, rely on the spontaneous initiation of a detonation in the degenerate C/O material of a white dwarf. The length scales pertinent to the initiation of the detonation are notoriously unresolved in multi-dimensional stellar simulations. This prompts the use of results from higher-resolution 1D simulations, such as the ones performed for this work, as guidelines for deciding whether or not the conditions reached in the higher-dimensional full-star simulations would successfully lead to the onset of a detonation. Spontaneous initiation relies on the existence of a suitable gradient in the self-ignition (induction) times of the fuel, which we set up with a spatially localized non-uniformity of temperature -- a hot spot. We determine the critical (smallest) sizes of such hot spots that still marginally result in a detonation in white dwarf matter by integrating the reactive Euler equations with the hydrodynamics code FLASH. We quantify the dependence of the critical sizes of such hot spots on composition, background temperature, peak temperature, geometry, and the functional form of the temperature disturbance, much of which was hitherto largely unexplored in the literature. We discuss the implications of our results in the context of modeling of SNe Ia.
    Comment: 43 pages, 12 figures, 12 tables.
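The gradient mechanism underlying this abstract can be illustrated with a toy calculation: the Zel'dovich "spontaneous wave" travels along an induction-time gradient at speed u_sp = (dτ/dx)⁻¹, and initiation is favoured where this speed remains comparable to the detonation speed over a sufficiently large region. The sketch below is a minimal illustration with an Arrhenius-style induction time and invented constants; it is not the FLASH setup or the white-dwarf reaction physics of the paper.

```python
import numpy as np

def spontaneous_wave_speed(x, T, tau_of_T):
    """Zel'dovich phase-wave speed u_sp = (d tau / dx)^-1 along a
    hot-spot temperature profile: the speed at which successive fluid
    elements self-ignite. Detonation is favoured where u_sp stays near
    the detonation (Chapman-Jouguet) speed over a large enough region."""
    tau = tau_of_T(T)              # induction (self-ignition) time at each point
    dtau_dx = np.gradient(tau, x)  # induction-time gradient
    return 1.0 / np.abs(dtau_dx)

# Toy Arrhenius induction time -- illustrative constants only.
tau = lambda T: 1e-9 * np.exp(2.0e10 / T)

x = np.linspace(0.0, 1.0e5, 200)     # position across the hot spot (cm)
T = 2.0e9 - 5.0e8 * (x / x[-1])      # linear temperature decline (K)
u_sp = spontaneous_wave_speed(x, T, tau)
# u_sp is largest where the temperature (and hence induction-time)
# gradient is shallowest -- here, at the hot end of the profile.
```

A shallower temperature gradient stretches the ignition sequence over more material, which is why the critical hot-spot size depends so strongly on the functional form of the disturbance.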

    Quantitative blood flow measurement in rat brain with multiphase arterial spin labelling magnetic resonance imaging

    Cerebral blood flow (CBF) is an important parameter in many diseases and functional studies, and it can be accurately measured in humans using arterial spin labelling (ASL) MRI. However, although rat models are frequently used for preclinical studies of both human disease and brain function, rat CBF measurements show poor consistency between studies. This lack of reproducibility is due in part to the smaller size and differing head geometry of rats compared to humans, as well as to the differing analysis methodologies employed and the higher field strengths used for preclinical MRI. To address these issues, we have implemented, optimised and validated a multiphase pseudo-continuous ASL (pCASL) technique, which overcomes many of the limitations of rat CBF measurement. Three rat strains (Wistar, Sprague Dawley and Berlin Druckrey IX) were used, and CBF values were validated against gold-standard autoradiography measurements. Label positioning was found to be optimal at 45°, while the post-label delay was optimised to 0.55 s. Whole-brain CBF measures were 109 ± 22, 111 ± 18 and 100 ± 15 mL/100 g/min by multiphase pCASL, and 108 ± 12, 116 ± 14 and 122 ± 16 mL/100 g/min by autoradiography, in the Wistar, SD and BDIX cohorts, respectively. Tumour model analysis showed that the developed methods also apply in disease states. Thus, optimised multiphase pCASL provides robust, reproducible and non-invasive measurement of CBF in rats.
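The abstract does not give the quantification model, and the multiphase fitting it describes is more involved than a single equation. As background, though, a widely used single-compartment formula for converting a pCASL control-minus-label signal difference into CBF can be sketched as follows; every parameter value here is illustrative except the 0.55 s post-label delay, which is the optimum reported above.

```python
import math

def pcasl_cbf(delta_m, m0, pld=0.55, tau=1.4, t1_blood=2.2,
              alpha=0.85, lam=0.9):
    """Single-compartment pCASL quantification (consensus-style form).
    delta_m  : control - label signal difference
    m0       : equilibrium magnetization signal
    pld      : post-label delay in s (0.55 s per the abstract)
    tau      : label duration in s (assumed)
    t1_blood : longitudinal relaxation time of blood in s (assumed)
    alpha    : labelling efficiency (assumed)
    lam      : blood-brain partition coefficient, mL/g (assumed)
    Returns CBF in mL/100 g/min."""
    num = 6000.0 * lam * delta_m * math.exp(pld / t1_blood)
    den = 2.0 * alpha * t1_blood * m0 * (1.0 - math.exp(-tau / t1_blood))
    return num / den

# Example: a ~1.2% signal difference yields a physiologically plausible
# rat CBF on the order of the values reported in the abstract.
cbf = pcasl_cbf(delta_m=0.012, m0=1.0)
```

The multiphase approach of the paper instead fits the signal across several label phases, which makes the estimate robust to imperfect labelling efficiency; the single-equation form above is only the simplest point of comparison.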

    Study of the Detonation Phase in the Gravitationally Confined Detonation Model of Type Ia Supernovae

    We study the gravitationally confined detonation (GCD) model of Type Ia supernovae through the detonation phase and into homologous expansion. In the GCD model, a detonation is triggered by the surface flow due to single-point, off-center flame ignition in carbon-oxygen white dwarfs. The simulations are unique in the degree to which non-idealized physics is used to treat the reactive flow, including weak reaction rates and a time-dependent treatment of material in nuclear statistical equilibrium (NSE). Careful attention is paid to accurately calculating the final composition of material which is burned to NSE and frozen out in the rapid expansion following the passage of the detonation wave over the high-density core of the white dwarf, and an efficient method for nucleosynthesis post-processing is developed which obviates the need for costly network calculations along tracer-particle thermodynamic trajectories. Observational diagnostics are presented for the explosion models, including abundance stratifications and integrated yields. We find that for all of the ignition conditions studied here, a self-regulating process comprising neutronization and stellar expansion results in final ⁵⁶Ni masses of ∼1.1 M☉. However, more energetic models result in larger total NSE and stable Fe-peak yields. The total yield of intermediate-mass elements is ∼0.1 M☉, and the explosion energies are all around 1.5×10⁵¹ erg. The explosion models are briefly compared to the inferred properties of recent Type Ia supernova observations. The potential for surface detonation models to produce lower-luminosity (lower ⁵⁶Ni mass) supernovae is discussed.
    Comment: 43 pages, 4 tables, 20 figures; submitted to ApJ.

    The Medical Segmentation Decathlon

    International challenges have become the de facto standard for comparative assessment of image analysis algorithms. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. We organized the Medical Segmentation Decathlon (MSD) -- a biomedical image analysis challenge in which algorithms compete in a multitude of both tasks and modalities -- to investigate the hypothesis that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. The MSD results confirmed this hypothesis; moreover, the MSD winner continued to generalize well to a wide range of other clinical problems over the following two years. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate for algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized for scientists who are not versed in AI model training.

    Derivation of a general three-dimensional crack-propagation law: A generalization of the principle of local symmetry

    We derive a general crack propagation law for slow brittle cracking, in two and three dimensions, using symmetry, gauge invariance, and gradient expansions. Our derivation provides explicit justification for the "principle of local symmetry," which has been used extensively to describe two-dimensional crack growth, but goes beyond that principle to describe three-dimensional crack phenomena as well. We also find that new material properties, besides the fracture toughness and elastic constants previously used to describe cracking, are needed to describe the growth of general cracks in three dimensions.
    Comment: 31 pages, including several figures.
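For readers unfamiliar with it, the "principle of local symmetry" has a compact conventional statement in terms of the stress intensity factors at the crack tip (this is the standard textbook form, not the generalization derived in the paper):

```latex
% Principle of local symmetry (2D): the crack selects the path along
% which the shear (mode-II) stress intensity factor vanishes at the tip,
% leaving pure opening (mode-I) loading; growth then occurs when the
% mode-I factor reaches the fracture toughness.
K_{\mathrm{II}} = 0, \qquad K_{\mathrm{I}} \ge K_{\mathrm{Ic}}
```

Here K_Ic is the fracture toughness mentioned in the abstract; the paper's contribution is to justify this criterion and to extend it, with additional material parameters, to fully three-dimensional crack fronts.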

    Practical considerations for measuring the effective reproductive number, Rt.

    Estimation of the effective reproductive number Rt is important for detecting changes in disease transmission over time. During the Coronavirus Disease 2019 (COVID-19) pandemic, policy makers and public health officials are using Rt to assess the effectiveness of interventions and to inform policy. However, estimation of Rt from available data presents several challenges, with critical implications for the interpretation of the course of the pandemic. The purpose of this document is to summarize these challenges, illustrate them with examples from synthetic data, and, where possible, make recommendations. For near-real-time estimation of Rt, we recommend the approach of Cori and colleagues, which uses data from before time t and empirical estimates of the distribution of the time between infections. Methods that require data from after time t, such as that of Wallinga and Teunis, are conceptually and methodologically less suited for near-real-time estimation, but may be appropriate for retrospective analyses of how individuals infected at different time points contributed to the spread. We advise caution when using methods derived from the approach of Bettencourt and Ribeiro, as the resulting Rt estimates may be biased if the underlying structural assumptions are not met. Two key challenges common to all approaches are accurate specification of the generation interval and reconstruction of the time series of new infections from observations occurring long after the moment of transmission. Naive approaches for dealing with observation delays, such as subtracting delays sampled from a distribution, can introduce bias. We provide suggestions for how to mitigate this and other technical challenges, and highlight open problems in Rt estimation.
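The core of the recommended Cori-style estimator is the ratio of new infections to the "total infectiousness" implied by past infections and the generation interval. A minimal point-estimate sketch (the published method additionally wraps this ratio in a gamma-Poisson posterior, which is omitted here):

```python
import numpy as np

def cori_rt(incidence, gen_interval, window=7):
    """Point estimate of Rt in the spirit of Cori et al.: cases in a
    trailing window divided by total infectiousness in that window,
    where total infectiousness is Lambda_t = sum_{s>=1} w_s * I_{t-s}
    and w is the generation-interval distribution."""
    I = np.asarray(incidence, float)
    w = np.asarray(gen_interval, float)
    w = w / w.sum()                        # w[s-1] = P(generation interval = s days)
    lam = np.array([
        sum(w[s - 1] * I[t - s] for s in range(1, min(t, len(w)) + 1))
        for t in range(len(I))
    ])
    rt = np.full(len(I), np.nan)           # undefined before the window fills
    for t in range(window, len(I)):
        num = I[t - window + 1 : t + 1].sum()
        den = lam[t - window + 1 : t + 1].sum()
        if den > 0:
            rt[t] = num / den
    return rt

# Synthetic check: incidence growing 10%/day with a generation interval
# fixed at 4 days implies Rt = 1.1**4 once the window is fully covered.
cases = 1.1 ** np.arange(40)
rt = cori_rt(cases, gen_interval=[0.0, 0.0, 0.0, 1.0])
```

Note that this uses only data up to time t, which is what makes the approach suitable for near-real-time use; it does not address the observation-delay and infection-reconstruction challenges the abstract emphasizes.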

    The Medical Segmentation Decathlon

    International challenges have become the de facto standard for comparative assessment of image analysis algorithms on a given task. Segmentation is so far the most widely investigated medical image processing task, but the various segmentation challenges have typically been organized in isolation, such that algorithm development was driven by the need to tackle a single specific clinical problem. We hypothesized that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. To investigate this hypothesis, we organized the Medical Segmentation Decathlon (MSD) -- a biomedical image analysis challenge in which algorithms compete in a multitude of both tasks and modalities. The underlying data set was designed to explore the axes of difficulty typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data and small objects. The MSD challenge confirmed that algorithms with consistently good performance on a set of tasks preserved their good average performance on a different set of previously unseen tasks. Moreover, by monitoring the MSD winner for two years, we found that this algorithm continued to generalize well to a wide range of other clinical problems, further confirming our hypothesis. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms are mature, accurate, and generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate for algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized to non-AI experts.
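The abstract does not spell out the evaluation metric, but segmentation challenges like the MSD conventionally rank algorithms with overlap measures such as the Dice similarity coefficient, which is simple to compute from binary masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient 2|A ∩ B| / (|A| + |B|) between two
    binary segmentation masks; 1.0 is perfect overlap, 0.0 is none.
    eps guards against division by zero when both masks are empty."""
    pred = np.asarray(pred, bool)
    target = np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Example: masks agreeing on one of two foreground voxels score 0.5.
score = dice([1, 1, 0, 0], [1, 0, 1, 0])
```

Because Dice normalizes by total foreground size, it behaves reasonably on the small objects and unbalanced labels the MSD data set was designed to probe, which is one reason overlap metrics dominate this field.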

    Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States

    Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multimodel ensemble forecast that combined predictions from dozens of groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national levels from April 2020 through October 2021. The performance of the 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this period showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naïve baseline model. Forecast accuracy degraded as models made predictions further into the future, with the probabilistic error at a 20-week horizon three to five times larger than at a 1-week horizon. This project underscores the role that collaboration and active coordination between governmental public-health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal responses to outbreaks.
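The abstract does not detail how the ensemble combined its member forecasts. For forecasts submitted in quantile format, as the Hub's were, a common and robust combination scheme is to aggregate models quantile-by-quantile, for example by taking the median across models at each quantile level. A minimal sketch (model names and numbers illustrative):

```python
import numpy as np

def quantile_median_ensemble(forecasts):
    """Combine probabilistic forecasts expressed as predictive quantiles
    (rows = models, columns = shared quantile levels) by taking the
    median across models at each quantile level. The median makes the
    ensemble robust to a single badly miscalibrated member."""
    return np.median(np.asarray(forecasts, float), axis=0)

# Three hypothetical models' 0.25 / 0.5 / 0.75 quantiles for weekly deaths:
models = [
    [100, 150, 200],   # model A
    [120, 160, 260],   # model B
    [ 90, 140, 220],   # model C
]
ens = quantile_median_ensemble(models)   # → [100., 150., 220.]
```

Averaging quantiles (rather than, say, mixing the full distributions) keeps the combined forecast in the same quantile format the Hub collects and scores, which simplifies weekly operations.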